The Average Condition Number of Most Tensor Rank Decomposition Problems is Infinite

Authors

Abstract

The tensor rank decomposition, or canonical polyadic decomposition, is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations, that is, perturbations preserving the rank of the tensor being decomposed. The angular condition number, on the other hand, measures the sensitivity of the rank-1 summands up to scaling. We show that for random rank-2 tensors the expected value of the condition number is infinite for a wide range of choices of the density. Under a mild additional assumption, we show that the same is true for most higher ranks r ≥ 3 as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. On the contrary, random rank-2 tensors have a finite expected angular condition number. Based on numerical experiments, we conjecture that this could also be true for higher ranks. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss consequences of our results for algorithm design and for testing algorithms computing tensor rank decompositions.
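The condition number the abstract refers to can be probed numerically. The sketch below (an informal proxy, not the paper's exact definition or norm conventions) takes it as the inverse of the smallest nonzero singular value of the Terracini matrix, i.e. the Jacobian of the addition map sending the factor vectors of each rank-1 term to the assembled tensor; the variable names and the 3×3×3 example are invented for illustration:

```python
import numpy as np

def rank1_jacobian(a, b, c):
    """Jacobian of (a, b, c) -> vec(a (x) b (x) c), using the
    vectorization vec(a (x) b (x) c) = kron(a, kron(b, c))."""
    col = lambda v: v.reshape(-1, 1)
    da = np.kron(np.eye(a.size), col(np.kron(b, c)))       # derivative in a
    db = np.kron(col(a), np.kron(np.eye(b.size), col(c)))  # derivative in b
    dc = np.kron(col(a), np.kron(col(b), np.eye(c.size)))  # derivative in c
    return np.hstack([da, db, dc])

def condition_number(factors):
    """1 / (smallest nonzero singular value of the Terracini matrix).
    Each order-3 rank-1 term has a 2-dimensional scaling indeterminacy
    (a -> t*a, b -> s*b, c -> c/(t*s)), so the last 2r singular values
    are structurally zero and are skipped."""
    J = np.hstack([rank1_jacobian(a, b, c) for a, b, c in factors])
    svals = np.linalg.svd(J, compute_uv=False)  # sorted descending
    r = len(factors)
    return 1.0 / svals[J.shape[1] - 2 * r - 1]

rng = np.random.default_rng(0)
n = 3
# A generic random rank-2 decomposition of a 3x3x3 tensor.
good = [tuple(rng.standard_normal(n) for _ in range(3)) for _ in range(2)]
# Two nearly identical rank-1 terms: the decomposition is almost
# non-unique, so the condition number blows up.
a, b, c = (rng.standard_normal(n) for _ in range(3))
bad = [(a, b, c), (a + 1e-6 * rng.standard_normal(n), b, c)]

print(condition_number(good))  # moderate
print(condition_number(bad))   # several orders of magnitude larger
```

The second decomposition sums two rank-1 terms that nearly cancel into a single rank-1 tensor, which is exactly the kind of ill-conditioned configuration that drives the expected condition number to infinity.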


Similar articles

Tensor rank-one decomposition of probability tables

We propose a new additive decomposition of probability tables: tensor rank-one decomposition. The basic idea is to decompose a probability table into a series of tables, such that the table that is the sum of the series is equal to the original table. Each table in the series has the same domain as the original table but can be expressed as a product of one-dimensional tables. Entries in tables ...
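The construction described above can be illustrated with a toy example (the table sizes, weights, and numbers here are invented, not taken from the paper): a three-way probability table built as a weighted sum of two terms, each a product of one-dimensional tables.

```python
import numpy as np

def outer3(x, y, z):
    """Outer product of three one-dimensional tables."""
    return np.einsum('i,j,k->ijk', x, y, z)

# Two rank-one terms, each a product of 1-D tables; mixture
# weights 0.6 / 0.4 keep the sum a valid probability table.
x1, y1, z1 = np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.9, 0.1])
x2, y2, z2 = np.array([0.2, 0.8]), np.array([0.6, 0.4]), np.array([0.3, 0.7])
P = 0.6 * outer3(x1, y1, z1) + 0.4 * outer3(x2, y2, z2)

assert np.isclose(P.sum(), 1.0) and (P >= 0).all()  # a genuine distribution
# P itself is not a single product of its marginals (it has rank 2):
px, py, pz = P.sum(axis=(1, 2)), P.sum(axis=(0, 2)), P.sum(axis=(0, 1))
assert not np.allclose(P, outer3(px, py, pz))
```

The final check shows why more than one term is needed in general: only tables describing fully independent variables collapse to a single rank-one term.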

Full text

Groups of infinite rank with a normalizer condition on subgroups

groups of infinite rank in which every subgroup is either normal or self-normalizing are characterized in terms of their subgroups of infinite rank.

Full text

InfTucker: t-Process based Infinite Tensor Decomposition

Tensor decomposition is a powerful tool for multiway data analysis. Many popular tensor decomposition approaches—such as the Tucker decomposition and CANDECOMP/PARAFAC (CP)—conduct multi-linear factorization. They are insufficient to model (i) complex interactions between data entities, (ii) various data types (e.g. missing data and binary data), and (iii) noisy observations and outliers. To ad...

Full text

Sparse and Low-Rank Tensor Decomposition

Motivated by the problem of robust factorization of a low-rank tensor, we study the question of sparse and low-rank tensor decomposition. We present an efficient computational algorithm that modifies Leurgans’ algorithm for tensor factorization. Our method relies on a reduction of the problem to sparse and low-rank matrix decomposition via the notion of tensor contraction. We use well-understoo...
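The tensor-contraction reduction the abstract mentions can be sketched as follows (a minimal illustration of the contraction step only, with invented dimensions; it does not implement the sparse-plus-low-rank recovery itself): contracting a low-rank third-order tensor along one mode with a vector yields a matrix that inherits the same factor matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 4, 2
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
# A rank-r third-order tensor T = sum_s A[:,s] (x) B[:,s] (x) C[:,s].
T = np.einsum('ir,jr,kr->ijk', A, B, C)

w = rng.standard_normal(n)
M = np.einsum('ijk,k->ij', T, w)   # contract the third mode with w

# The contraction preserves the factors: M = A @ diag(C^T w) @ B^T,
# so recovering A and B reduces to a matrix decomposition problem.
assert np.allclose(M, A @ np.diag(C.T @ w) @ B.T)
```

Because the contracted matrix keeps the factor matrices A and B, well-understood matrix techniques can then be brought to bear on what started as a tensor problem.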

Full text

Most Tensor Problems are NP-Hard

We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list here includes: determining the feasibility of a system of bilinear equations, deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determi...

Full text


Journal

Journal title: Foundations of Computational Mathematics

Year: 2022

ISSN: 1615-3383, 1615-3375

DOI: https://doi.org/10.1007/s10208-022-09551-1